Synchronization in a Thread-Pool Model and its Application in Parallel Computing
Authors
Abstract
In this paper, we consider synchronization in a thread-pool model and its application in scientific parallel computing. Consider a server system that processes requests from clients, where each request either queries or updates a unique shared variable. The server receives a request and passes it to a job thread; the job thread processes the request and sends a reply back to the client. There are two concurrency models for implementing such a system: the thread-per-request model and the thread-pool model [2]. In the thread-per-request model, every incoming request causes a new job thread to be spawned to process it. In the thread-pool model, the system maintains a pool (or pools) of pre-spawned job threads. When a new request arrives, it is passed to a free job thread in the pool; after the request is processed, the thread is returned to the pool to execute another request. The advantages of the thread-pool model are that the system can bound the number of threads and avoid the cost of repeated thread creation. The thread-pool model is often used in embedded systems and web servers.

We have developed a synchronization methodology well suited to the thread-pool model. Synchronization has been studied extensively in operating systems and parallel computing, and almost all existing synchronization code relies on Operating System (OS) primitives that block the executing thread. Such solutions are valid for the thread-per-request model. In the thread-pool model, however, where threads are a limited resource, such solutions are often inefficient or even unacceptable. For example, consider the producers/consumers problem with a bounded buffer found in many textbooks [1]. Suppose that there are 10 buffer entries and that the thread pool maintains fewer than 10 threads. Then, if the first 10 requests are from consumers, the system becomes deadlocked. In our approach, when an execution needs to be blocked, the synchronization code releases the executing thread and returns it to the thread pool, rather than blocking it, so that the thread can execute another job. This is done by returning from one function and calling another function; as a result, switching from one job to another is extremely fast. We applied our synchronization approach to grid computation and obtained good results. The results show that our thread-pool synchronization approach can speed up many applications, such as web servers and embedded systems.
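The abstract does not include code, so the following Java sketch is only an illustration of the idea it describes, not the paper's implementation: instead of blocking a pool thread when a bounded-buffer operation cannot proceed, the request parks its continuation and the thread returns to the pool; the continuation is re-submitted once the buffer state changes. All names (NonBlockingBoundedBuffer, put, take, onDone, onItem) and the choice of a 2-thread pool with a 10-entry buffer are assumptions made for this example, mirroring the consumers-arrive-first scenario from the abstract.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

// Hypothetical illustration (not the paper's code): a bounded buffer whose
// operations never block a pool thread. A request that cannot proceed parks
// its continuation and releases the thread back to the pool; the continuation
// is re-submitted to the pool when the buffer state changes.
final class NonBlockingBoundedBuffer<T> {
    private final Queue<T> items = new ArrayDeque<>();
    private final Queue<Consumer<T>> waitingConsumers = new ArrayDeque<>();
    private final Queue<Runnable> waitingProducers = new ArrayDeque<>();
    private final int capacity;
    private final ExecutorService pool;

    NonBlockingBoundedBuffer(int capacity, ExecutorService pool) {
        this.capacity = capacity;
        this.pool = pool;
    }

    // Called from a pool thread; always returns immediately instead of blocking.
    synchronized void put(T item, Runnable onDone) {
        Consumer<T> consumer = waitingConsumers.poll();
        if (consumer != null) {                       // hand the item directly to a parked consumer
            pool.execute(() -> consumer.accept(item));
            pool.execute(onDone);
        } else if (items.size() < capacity) {         // space available: store and continue
            items.add(item);
            pool.execute(onDone);
        } else {                                      // buffer full: park the whole request as a closure
            waitingProducers.add(() -> put(item, onDone));
        }
    }

    synchronized void take(Consumer<T> onItem) {
        T item = items.poll();
        if (item != null) {
            pool.execute(() -> onItem.accept(item));
            Runnable producer = waitingProducers.poll();
            if (producer != null) pool.execute(producer);  // re-submit one parked producer
        } else {
            waitingConsumers.add(onItem);             // buffer empty: park the consumer's continuation
        }
    }
}

public class Demo {
    public static void main(String[] args) throws InterruptedException {
        // Fewer pool threads (2) than buffer slots (10): with blocking waits the
        // textbook scenario in the abstract can deadlock; here it cannot,
        // because a parked request holds no thread.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        NonBlockingBoundedBuffer<Integer> buf = new NonBlockingBoundedBuffer<>(10, pool);

        for (int i = 0; i < 10; i++)                  // the first 10 requests are consumers
            pool.execute(() -> buf.take(v -> System.out.println("got " + v)));
        for (int i = 0; i < 10; i++) {
            final int v = i;
            pool.execute(() -> buf.put(v, () -> System.out.println("put " + v)));
        }

        Thread.sleep(500);
        pool.shutdown();
    }
}
```

The design point the sketch tries to capture is that a blocked request costs only a queued closure, not a thread: the pool thread returns from the synchronization call and is immediately free to run another job, which is what the abstract describes as returning from one function and calling another.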
Similar articles
Dipp: an Uniform Programming Model for Shared and Distributed Parallel Programming
We present a programming model which offers ease of parallel application programming for both shared variables and distributed programming. The model is more abstract than distributed models such as PVM, MPI and Split C and shared models such as thread packages or the ANL Parmacs. It improves on the property of memory-model-independence of other models such as CC++ and Nexus and offers a remarkab...
A Case for Synchronous Objects in Component-Bound Architectures
This paper assesses the concept of synchronous objects, and shows that these objects make it possible to treat most facets of the development of interactive/distributed applications (e.g., multi-threading, GUI integration, remote accesses, inheritance anomaly, modeling and performance) very efficiently. A synchronous object is similar to a common object, but it can execute a special method on a...
Adaptive Performance Optimization under Power Constraint in Multi-thread Applications with Diverse Scalability
In modern data centers, energy usage represents one of the major factors affecting operational costs. Power capping is a technique that limits the power consumption of individual systems, which allows reducing the overall power demand at both cluster and data center levels. However, literature power capping approaches do not fit well the nature of important applications based on first-class mul...
A New Relaxed Memory Consistency Model for Shared-Memory Multiprocessors with Parallel-Multithreaded Processing Elements
The release consistency model is the generally accepted hardware-centric relaxed memory consistency model because of its performance and implementation complexity. By extending the release consistency model, in this paper, we propose a hardware-centric memory consistency model particularly for shared-memory multiprocessor systems with parallel-multithreaded processing elements. The new model us...
Enhancements of DAVRID for Massively Parallel Processing
J. H. Kim and S. Y. Han (Dept. of Computer Science, Seoul National University, Korea), D. J. Hwang (Dept. of Information Engineering, Sungkyunkwan University, Korea), H. H. Kim (Dept. of Computer Science, Seowon University, Korea), and S. H. Cho (Dept. of Computer Science, Kangnam University, Korea). Abstract: This pap...